Results 1-20 of 34,206
1.
Neuroimaging Clin N Am ; 34(2): 281-292, 2024 May.
Article in English | MEDLINE | ID: mdl-38604712

ABSTRACT

MR imaging's exceptional capabilities in vascular imaging stem from its ability to visualize and quantify vessel wall features, such as plaque burden, composition, and biomechanical properties. The application of advanced MR imaging techniques, including two-dimensional and three-dimensional black-blood MR imaging, T1 and T2 relaxometry, diffusion-weighted imaging, dynamic contrast-enhanced MR imaging, wall shear stress, and arterial stiffness measurements, empowers clinicians and researchers to explore the intricacies of vascular diseases. This array of techniques provides comprehensive insights into the development and progression of vascular pathologies, facilitating earlier diagnosis, targeted treatment, and improved patient outcomes in the management of vascular health.


Subjects
Diffusion Magnetic Resonance Imaging; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional/methods; Image Interpretation, Computer-Assisted/methods
2.
Comput Methods Programs Biomed ; 249: 108160, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38583290

ABSTRACT

BACKGROUND AND OBJECTIVE: Early detection and grading of Diabetic Retinopathy (DR) is essential to determine an adequate treatment and prevent severe vision loss. However, the manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the availability of human graders. Current automatic approaches for DR grading attempt the joint detection of all signs at the same time. However, the classification can be optimized if red lesions and bright lesions are processed independently, since the task is divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention of the dark structures from the bright structures of the retina. As the main contribution, this approach allowed us to generate independent, interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. METHODS: Our approach is based on a novel attention mechanism which focuses separately on the dark and the bright structures of the retina by performing a prior image decomposition. This mechanism can be seen as an XAI approach which generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning-related techniques, such as data augmentation, transfer learning, and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. RESULTS: The Kaggle DR detection dataset was used for method development and validation. The proposed approach achieved 83.7% accuracy and a Quadratic Weighted Kappa of 0.78 in classifying DR into 5 severity degrees, which outperforms several state-of-the-art approaches. Nevertheless, the main result of this work is the generated attention maps, which reveal the pathological regions of the image while distinguishing red lesions from bright lesions. These maps provide explainability for the model predictions. CONCLUSIONS: Our results suggest that our framework is effective for automatically grading DR. The separate attention approach has proven useful for optimizing the classification. On top of that, the obtained attention maps facilitate visual interpretation for clinicians. Therefore, the proposed method could serve as a diagnostic aid for the early detection and grading of DR.
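As a hedged illustration of the Quadratic Weighted Kappa (QWK) metric reported above (the grade labels below are invented and this is not the authors' evaluation code), QWK for 5-class DR grading can be computed with scikit-learn:

```python
# Illustrative sketch: Quadratic Weighted Kappa (QWK) for 5-class DR grading.
# Labels are toy data; only the metric call reflects common practice.
from sklearn.metrics import cohen_kappa_score, accuracy_score

y_true = [0, 1, 2, 4, 3, 2, 0, 1, 4, 2]   # reference DR grades (0 = no DR ... 4 = proliferative)
y_pred = [0, 1, 2, 3, 3, 2, 0, 2, 4, 2]   # model-predicted grades

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
acc = accuracy_score(y_true, y_pred)
print(f"accuracy={acc:.3f}, QWK={qwk:.3f}")  # QWK penalizes large grade discrepancies more heavily
```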


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnosis; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Fundus Oculi
3.
Artif Intell Med ; 151: 102866, 2024 May.
Article in English | MEDLINE | ID: mdl-38593684

ABSTRACT

An echocardiogram is a sophisticated ultrasound imaging technique employed to diagnose heart conditions. The transthoracic echocardiogram, one of the most prevalent types, is instrumental in evaluating significant cardiac diseases. However, interpreting its results heavily relies on the clinician's expertise. In this context, artificial intelligence has emerged as a vital tool for helping clinicians. This study critically analyzes key state-of-the-art research that uses deep learning techniques to automate transthoracic echocardiogram analysis and support clinical judgments. We systematically organize and categorize articles that propose solutions for view classification, image quality and dataset enhancement, segmentation and identification of cardiac structures, detection of cardiac function abnormalities, and quantification of cardiac function. We compare the performance of various deep learning approaches within each category, identifying the most promising methods. Additionally, we highlight limitations in current research and explore promising avenues for future exploration. These include addressing generalizability issues, incorporating novel AI approaches, and tackling the analysis of rare cardiac diseases.


Subjects
Deep Learning; Echocardiography; Humans; Echocardiography/methods; Heart Diseases/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Artificial Intelligence
4.
Breast Cancer Res ; 26(1): 71, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658999

ABSTRACT

BACKGROUND: To compare the compartmentalized diffusion-weighted models, intravoxel incoherent motion (IVIM) and restriction spectrum imaging (RSI), in characterizing breast lesions and normal fibroglandular tissue. METHODS: This prospective study enrolled 152 patients with 157 histopathologically verified breast lesions (41 benign and 116 malignant). All patients underwent a full-protocol preoperative breast MRI, including a multi-b-value DWI sequence. The diffusion parameters derived from the mono-exponential model (ADC), the IVIM model (Dt, Dp, f), and the RSI model (C1, C2, C3, C1C2, F1, F2, F3, F1F2) were quantitatively measured and then compared among malignant lesions, benign lesions, and normal fibroglandular tissue using the Kruskal-Wallis test. The Mann-Whitney U-test was used for the pairwise comparisons. Diagnostic models were built by logistic regression analysis. ROC analysis was performed using five-fold cross-validation, and the mean AUC values were calculated and compared to evaluate the discriminative ability of each parameter or model. RESULTS: Almost all quantitative diffusion parameters showed significant differences in distinguishing malignant breast lesions from both benign lesions (other than C2) and normal fibroglandular tissue (all parameters) (all P < 0.0167). For the comparisons of benign lesions and normal fibroglandular tissue, the parameters derived from IVIM (Dp, f) and RSI (C1, C2, C1C2, F1, F2, F3) showed significant differences (all P < 0.005). When using individual parameters, the RSI-derived parameters F1, C1C2, and C2 yielded the highest AUCs for the comparisons of malignant vs. benign, malignant vs. normal tissue, and benign vs. normal tissue (AUCs = 0.871, 0.982, and 0.863, respectively). Furthermore, the combined diagnostic model (IVIM + RSI) exhibited the highest diagnostic efficacy for the pairwise discriminations (AUCs = 0.893, 0.991, and 0.928, respectively). CONCLUSIONS: Quantitative parameters derived from the three-compartment RSI model show great promise as imaging indicators for the differential diagnosis of breast lesions compared with the bi-exponential IVIM model. Additionally, the combined IVIM and RSI model achieves superior diagnostic performance in characterizing breast lesions.
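For context on the bi-exponential IVIM model referenced above, here is a hedged sketch (synthetic signals, not the study's fitting pipeline) of how the parameters Dt (tissue diffusivity), Dp (pseudo-diffusion), and f (perfusion fraction) are commonly estimated from multi-b-value data:

```python
# Sketch of a bi-exponential IVIM fit, S(b)/S0 = f*exp(-b*Dp) + (1-f)*exp(-b*Dt).
# Synthetic data and illustrative bounds; not the authors' processing.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, Dp, Dt):
    return f * np.exp(-b * Dp) + (1.0 - f) * np.exp(-b * Dt)

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800, 1000], dtype=float)
true = dict(f=0.10, Dp=0.02, Dt=0.0012)          # typical orders of magnitude, in mm^2/s
signal = ivim(b_values, **true) + np.random.normal(0, 0.005, b_values.size)

popt, _ = curve_fit(ivim, b_values, signal,
                    p0=[0.1, 0.01, 0.001],
                    bounds=([0, 0.003, 0.0001], [0.5, 0.1, 0.003]))
f_fit, Dp_fit, Dt_fit = popt
print(f"f={f_fit:.3f}, Dp={Dp_fit:.4f}, Dt={Dt_fit:.5f}")
```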


Subjects
Breast Neoplasms; Breast; Diffusion Magnetic Resonance Imaging; Humans; Female; Diffusion Magnetic Resonance Imaging/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Breast Neoplasms/diagnosis; Middle Aged; Adult; Aged; Breast/diagnostic imaging; Breast/pathology; Prospective Studies; ROC Curve; Image Interpretation, Computer-Assisted/methods; Young Adult; Diagnosis, Differential
5.
Sci Rep ; 14(1): 9336, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653997

ABSTRACT

Skin cancer is the most prevalent kind of cancer in humans. It is estimated that more than 1 million people worldwide develop skin cancer every year. The effectiveness of therapy is significantly impacted by early identification of this illness. Preprocessing is the initial detection stage, enhancing the quality of skin images by removing undesired background noise and objects. This study aims to compile the preprocessing techniques for skin cancer imaging that are currently available. Researchers looking into automated skin cancer diagnosis might use this article as an excellent place to start. The fully convolutional encoder-decoder network and Sparrow search algorithm (FCEDN-SpaSA) are proposed in this study for the segmentation of dermoscopic images. The individual wolf method and the ensemble ghosting technique are integrated to generate a neighbour-based search strategy in SpaSA, stressing the correct balance between navigation and exploitation. The classification procedure is accomplished by using an adaptive CNN technique to discriminate between normal skin and malignant skin lesions suggestive of disease. Our method provides classification accuracies comparable to commonly used incremental learning techniques while using less energy, storage space, memory access, and training time (only network updates with new training samples, no network sharing). In simulation, the segmentation performance of the proposed technique on the ISBI 2017, ISIC 2018, and PH2 datasets reached accuracies of 95.28%, 95.89%, 92.70%, and 98.78%, respectively; the classification performance, assessed on the same data, reached an accuracy of 91.67%. The efficiency of the suggested strategy is demonstrated through comparisons with cutting-edge methodologies.


Assuntos
Algoritmos , Dermoscopia , Redes Neurais de Computação , Neoplasias Cutâneas , Humanos , Neoplasias Cutâneas/diagnóstico , Neoplasias Cutâneas/diagnóstico por imagem , Neoplasias Cutâneas/classificação , Neoplasias Cutâneas/patologia , Dermoscopia/métodos , Processamento de Imagem Assistida por Computador/métodos , Interpretação de Imagem Assistida por Computador/métodos , Pele/patologia , Pele/diagnóstico por imagem
6.
BMC Med Imaging ; 24(1): 95, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654162

ABSTRACT

OBJECTIVE: In radiation therapy, cancerous region segmentation in magnetic resonance images (MRI) is a critical step. For rectal cancer, the automatic segmentation of rectal tumors from an MRI is a great challenge. There are two main shortcomings in existing deep learning-based methods that lead to incorrect segmentation: 1) there are many organs surrounding the rectum, and the shape of some organs is similar to that of rectal tumors; 2) high-level features extracted by conventional neural networks often do not contain enough high-resolution information. Therefore, an improved U-Net segmentation network based on attention mechanisms is proposed to replace the traditional U-Net network. METHODS: The overall framework of the proposed method is based on the traditional U-Net. A ResNeSt module was added to extract the overall features, and a shape module was added after the encoder layer. We then combined the outputs of the shape module and the decoder to obtain the results. Moreover, the model uses different types of attention mechanisms so that the network learns the information needed to improve segmentation accuracy. RESULTS: We validated the effectiveness of the proposed method using 3773 2D MRI datasets from 304 patients. The results showed that the proposed method achieved 0.987, 0.946, 0.897, and 0.899 for Dice, MPA, MIoU, and FWIoU, respectively; these values are significantly better than those of other existing methods. CONCLUSION: The proposed method saves time and can help radiologists segment rectal tumors effectively, enabling them to focus on patients whose cancerous regions are difficult for the network to segment. SIGNIFICANCE: The proposed method can help doctors segment rectal tumors, thereby ensuring good diagnostic quality and accuracy.
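As a point of reference for the Dice score reported above, here is a hedged sketch of the standard Dice similarity coefficient for binary segmentation masks (toy arrays, not the paper's evaluation code):

```python
# Minimal sketch: Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

pred_mask = np.zeros((64, 64), dtype=np.uint8); pred_mask[20:40, 22:42] = 1
ref_mask  = np.zeros((64, 64), dtype=np.uint8); ref_mask[22:42, 22:42] = 1
print(f"Dice = {dice(pred_mask, ref_mask):.3f}")
```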


Subjects
Deep Learning; Magnetic Resonance Imaging; Rectal Neoplasms; Rectal Neoplasms/diagnostic imaging; Rectal Neoplasms/pathology; Humans; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Male
7.
Comput Biol Med ; 173: 108353, 2024 May.
Article in English | MEDLINE | ID: mdl-38520918

ABSTRACT

The grading of intracranial tumors is a key step in formulating clinical treatment plans and surgical guidelines. To support this, this paper proposes a dual-path parallel hierarchical model that automatically grades intracranial tumors with high accuracy. In this model, prior features of the solid tumor mass and intratumoral necrosis are extracted, and an optimal division of the dataset is achieved through multi-feature entropy weighting. The dual-path structure enables multi-modal input, and multiple features are superimposed and fused to grade the images. The model was tested on clinical images provided by the Second Affiliated Hospital of Dalian Medical University. The experiments show that the proposed model generalizes well, with an accuracy of 0.990, and can be applied to clinical diagnosis, with practical application prospects.


Subjects
Brain Neoplasms; Humans; Entropy; Brain Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods
8.
Magn Reson Imaging ; 109: 42-48, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447629

ABSTRACT

PURPOSE: To evaluate the performance of high-resolution free-breathing (FB) hepatobiliary phase imaging of the liver using the eXtra-Dimension Golden-angle RAdial Sparse Parallel (XD-GRASP) MRI technique. METHODS: Fifty-eight clinical patients (41 males, mean age = 52.9 ± 12.9 years) with liver lesions who underwent dynamic contrast-enhanced MRI with a liver-specific contrast agent were prospectively recruited for this study. Both breath-hold volumetric interpolated examination (BH-VIBE) imaging and FB imaging were performed during the hepatobiliary phase. FB images were acquired using a stack-of-stars golden-angle radial sequence and were reconstructed using the XD-GRASP method. Two experienced radiologists, blinded to the acquisition schemes, independently scored the overall image quality, liver edge sharpness, hepatic vessel clarity, conspicuity of lesion, and overall artifact level of each image. The non-parametric paired two-tailed Wilcoxon signed-rank test was used for statistical analysis. RESULTS: Compared to BH-VIBE images, XD-GRASP images received significantly higher scores (P < 0.05) for liver edge sharpness (4.83 ± 0.45 vs 4.29 ± 0.46), hepatic vessel clarity (4.64 ± 0.67 vs 4.15 ± 0.56), and conspicuity of lesion (4.75 ± 0.53 vs 4.31 ± 0.50). There were no significant differences (P > 0.05) between BH-VIBE and XD-GRASP images for overall image quality (4.61 ± 0.50 vs 4.74 ± 0.47) and overall artifact level (4.13 ± 0.44 vs 4.05 ± 0.61). CONCLUSION: Compared to conventional BH-VIBE MRI, FB radial acquisition combined with XD-GRASP reconstruction facilitates higher spatial resolution imaging of the liver during the hepatobiliary phase. This enhancement can significantly improve the visualization and evaluation of the liver.
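The reader-score comparison above relies on a paired two-tailed Wilcoxon signed-rank test. A hedged sketch of that test with SciPy follows (the score vectors are invented, not the study data):

```python
# Sketch of a paired, two-tailed Wilcoxon signed-rank comparison of reader scores.
from scipy.stats import wilcoxon

xd_grasp_scores = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]   # e.g., liver edge sharpness, one reader
bh_vibe_scores  = [4, 4, 4, 5, 4, 4, 4, 4, 5, 4]

stat, p = wilcoxon(xd_grasp_scores, bh_vibe_scores, alternative="two-sided")
print(f"W={stat:.1f}, p={p:.4f}")
```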


Subjects
Image Interpretation, Computer-Assisted; Respiration; Male; Humans; Adult; Middle Aged; Aged; Image Interpretation, Computer-Assisted/methods; Liver/diagnostic imaging; Magnetic Resonance Imaging/methods; Breath Holding; Contrast Media; Artifacts; Image Enhancement/methods; Imaging, Three-Dimensional/methods
9.
Artif Intell Med ; 149: 102782, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462283

ABSTRACT

Diabetic retinopathy (DR) is the most prevalent cause of visual impairment in adults worldwide. Typically, patients with DR do not show symptoms until later stages, by which time it may be too late to receive effective treatment. DR grading is challenging because of the small size and variation of lesion patterns. The key to fine-grained DR grading is to discover more discriminating elements, such as cotton wool spots, hard exudates, hemorrhages, and microaneurysms. Although deep learning models like convolutional neural networks (CNN) seem ideal for the automated detection of abnormalities in advanced clinical imaging, small lesions are very hard to distinguish using traditional networks. This work proposes a bi-directional spatial and channel-wise parallel attention-based network to learn discriminative features for diabetic retinopathy grading. The proposed attention block, plugged into a backbone network, helps extract features specific to fine-grained DR grading. This scheme boosts classification performance along with the detection of small lesion parts. Extensive experiments are performed on four widely used benchmark datasets for DR grading, and performance is evaluated on different quality metrics. Also, for model interpretability, activation maps are generated using the LIME method to visualize the predicted lesion parts. In comparison with state-of-the-art methods, the proposed IDANet exhibits better performance for DR grading and lesion detection.
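The abstract mentions LIME-based maps for interpretability. Below is a hedged sketch of the standard lime_image API; the classifier and fundus image are placeholders and the parameter values are illustrative, not the authors' settings:

```python
# Hedged sketch of LIME explanation maps for a retinal image classifier.
# `predict_fn` and `fundus_image` are placeholders for a trained model and an input image.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images: np.ndarray) -> np.ndarray:
    # Placeholder: should return class probabilities of shape (n_images, n_classes).
    return np.tile([0.1, 0.2, 0.4, 0.2, 0.1], (images.shape[0], 1))

fundus_image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed fundus photograph

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(fundus_image, predict_fn,
                                         top_labels=1, hide_color=0, num_samples=500)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img, mask)  # highlights the superpixels driving the predicted grade
```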


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Adult; Humans; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/pathology; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods
10.
Histopathology ; 84(7): 1139-1153, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38409878

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole-slide images (WSIs) is a very time-consuming process, hindering the direct application of AI to the collected tissue samples from large cohorts. In this study we addressed this issue by developing a deep-learning (DL)-based method for automatic curation of large pathology datasets with several slides per patient. METHODS: We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry-stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro-averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. RESULTS: In the primary dataset (FOXTROT), the algorithm achieved a high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994-0.996), and was able to accurately predict the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 [95% CI: 0.979-0.985] and 0.875 [95% CI: 0.864-0.887] were observed, which indicates the generalizability of the trained model on unseen datasets. With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in CR07 and 85.1% (2752 classified cases) in the DACHS cohort. CONCLUSION: Our findings show that using the low-resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can help researchers make the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotation.
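The endpoints above (macro-averaged AUROC, plus accuracy among cases above a 0.95 confidence threshold) can be illustrated with a hedged sketch on synthetic predictions; the class count and example class names are only assumptions, not the study data:

```python
# Sketch: macro-averaged AUROC over slide-type classes and accuracy among cases
# classified above a 0.95 confidence threshold. Synthetic predictions only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_classes = 5                                   # e.g., resection, biopsy, lymph node, IHC, TMA
y_true = rng.integers(0, n_classes, size=200)
logits = rng.normal(size=(200, n_classes))
logits[np.arange(200), y_true] += 3.0           # make the correct class more likely
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

macro_auroc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")

confident = probs.max(axis=1) >= 0.95
acc_confident = (probs[confident].argmax(axis=1) == y_true[confident]).mean()
print(f"macro AUROC={macro_auroc:.3f}, "
      f"accuracy on {confident.sum()} confident cases={acc_confident:.3f}")
```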


Subjects
Colorectal Neoplasms; Deep Learning; Humans; Colorectal Neoplasms/pathology; Colorectal Neoplasms/diagnosis; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods
11.
Br J Radiol ; 97(1156): 868-873, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38400772

ABSTRACT

PURPOSE: To evaluate intra-patient and interobserver agreement in patients who underwent liver MRI with gadoxetic acid using two different multi-arterial phase (AP) techniques. METHODS: A total of 154 prospectively enrolled patients underwent clinical gadoxetic acid-enhanced liver MRI twice within 12 months, using two different multi-arterial algorithms: CAIPIRINHA-VIBE and TWIST-VIBE. For every patient, breath-holding time, body mass index, sex, and age were recorded. The phase without contrast media and the APs were independently evaluated by two radiologists, who quantified Gibbs artefacts, noise, respiratory motion artefacts, and general image quality. Presence or absence of Gibbs artefacts and noise was compared by McNemar's test. Respiratory motion artefacts and image quality scores were compared using the Wilcoxon signed-rank test. Interobserver agreement was assessed by Cohen's kappa statistics. RESULTS: Compared with TWIST-VIBE, CAIPIRINHA-VIBE images had better scores for every parameter except the noise score, which was higher. Triple APs were always acquired with TWIST-VIBE but failed in 37% using CAIPIRINHA-VIBE: 11% had only one AP and 26% had two. Breath-holding time was the only parameter that influenced the success of multi-arterial techniques. TWIST-VIBE images had the worst scores for Gibbs and respiratory motion artefacts but a lower noise score. CONCLUSION: CAIPIRINHA-VIBE images were always diagnostic, but triple-AP acquisition failed in 37%. TWIST-VIBE was successful in obtaining three APs in all patients. Breath-holding time is the only parameter which can influence the preliminary choice between the CAIPIRINHA-VIBE and TWIST-VIBE algorithms. ADVANCES IN KNOWLEDGE: If the patient is expected to perform good breath-holds, TWIST-VIBE is preferable; otherwise, CAIPIRINHA-VIBE is more appropriate.
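For readers unfamiliar with the paired statistics named above, here is a hedged sketch of McNemar's test on paired artifact presence/absence and Cohen's kappa for interobserver agreement; all counts and scores below are invented, not the study data:

```python
# Sketch of the paired statistics mentioned in the abstract. Illustrative numbers only.
from statsmodels.stats.contingency_tables import mcnemar
from sklearn.metrics import cohen_kappa_score

# 2x2 paired table: rows = CAIPIRINHA-VIBE (artifact yes/no), cols = TWIST-VIBE (yes/no)
table = [[20, 10],
         [35, 89]]
result = mcnemar(table, exact=True)
print(f"McNemar p-value: {result.pvalue:.4f}")

# Interobserver agreement on a 4-point quality score (two readers, same 10 cases)
reader1 = [4, 3, 4, 2, 4, 3, 4, 4, 2, 3]
reader2 = [4, 3, 3, 2, 4, 3, 4, 4, 3, 3]
print(f"Cohen's kappa: {cohen_kappa_score(reader1, reader2):.3f}")
```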


Assuntos
Gadolínio DTPA , Aumento da Imagem , Interpretação de Imagem Assistida por Computador , Humanos , Interpretação de Imagem Assistida por Computador/métodos , Aumento da Imagem/métodos , Reprodutibilidade dos Testes , Imageamento por Ressonância Magnética/métodos , Meios de Contraste , Suspensão da Respiração , Artefatos , Fígado/diagnóstico por imagem
12.
Eur J Radiol ; 173: 111360, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38342061

ABSTRACT

PURPOSE: To determine the diagnostic accuracy of volumetric interpolated breath-hold examination sequences with fat suppression in Dixon technique (VIBE-Dixon) for cardiac thrombus detection. METHOD: From our clinical database, we retrospectively identified consecutive patients between 2014 and 2022 who had definite diagnosis or exclusion of cardiac thrombus confirmed by an independent adjudication committee, serving as the reference standard. All patients received 2D-Cine plus 2D-Late-Gadolinium-Enhancement (Cine + LGE) and VIBE-Dixon sequences. Two blinded readers assessed all images for the presence of cardiac thrombus. The diagnostic accuracy of Cine + LGE and VIBE-Dixon was determined and compared. RESULTS: Among 141 MRI studies (116 male, mean age: 61 years), mean image examination time was 28.8 ± 3.1 s for VIBE-Dixon and 23.3 ± 2.5 min for Cine + LGE. Cardiac thrombus was present in 49 patients (prevalence: 35%). For both readers, sensitivity for thrombus detection was significantly higher in VIBE-Dixon compared with Cine + LGE (Reader 1: 96% vs. 73%; Reader 2: 96% vs. 78%; p < 0.01 for both readers), whereas specificity did not differ significantly (Reader 1: 96% vs. 98%; Reader 2: 92% vs. 93%; p > 0.1). The overall diagnostic accuracy of VIBE-Dixon was higher than for Cine + LGE (95% vs. 89%, p = 0.02) and was non-inferior to the reference standard (delta ≤ 5% with probability > 95%). CONCLUSIONS: Biplanar VIBE-Dixon sequences, acquired within a few seconds, provided a very high diagnostic accuracy for cardiac thrombus detection. They could be used as stand-alone sequences to rapidly screen for cardiac thrombus in patients not amenable to lengthy acquisition times.


Subjects
Contrast Media; Thrombosis; Humans; Male; Middle Aged; Gadolinium; Retrospective Studies; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Thrombosis/diagnostic imaging; Image Enhancement/methods
13.
Magn Reson Med ; 91(6): 2391-2402, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38317286

ABSTRACT

PURPOSE: Clinical scanners require pulsed CEST sequences to maintain amplifier and specific absorption rate limits. During off-resonant RF irradiation and the interpulse delay, the magnetization can accumulate specific relative phases within the pulse train. In this work, we show that these phases are important to consider, as they can lead to unexpected artifacts when no interpulse gradient spoiling is performed during the saturation train. METHODS: We investigated sideband artifacts using a CEST-3D snapshot gradient-echo sequence at 3 T. Initially, Bloch-McConnell simulations were carried out with Pulseq-CEST, while measurements were performed in vitro and in vivo. RESULTS: Sidebands can be hidden in Z-spectra, and their structure becomes clearly visible only at high sampling. Sidebands are further influenced by B0 inhomogeneities and the RF phase cycling within the pulse train. In vivo, sidebands are mostly visible in liquid compartments such as CSF. Multi-pulse sidebands can be suppressed by interpulse gradient spoiling. CONCLUSION: We provide new insights into sidebands occurring in pulsed CEST experiments and show that, as in imaging sequences, gradient and RF spoiling play an important role. Gradient spoiling avoids misinterpretation of sidebands as CEST effects, especially in liquid environments including pathological tissue or for CEST resonances close to water. It is recommended to simulate pulsed CEST sequences in advance to avoid artifacts.


Subjects
Image Enhancement; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Image Enhancement/methods; Hydrogen-Ion Concentration; Image Interpretation, Computer-Assisted/methods
14.
BMC Med Inform Decis Mak ; 24(1): 37, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321416

ABSTRACT

The most common eye complication in people with diabetes is diabetic retinopathy (DR). It can cause blurred vision or even total blindness. Therefore, it is essential to promote early detection to prevent or alleviate the impact of DR. However, because symptoms may not be noticeable in the early stages of DR, it is difficult for doctors to identify them. Therefore, numerous predictive models based on machine learning (ML) and deep learning (DL) have been developed to determine all stages of DR. However, existing DR classification models either cannot classify every DR stage or use a computationally heavy approach. Common metrics such as accuracy, F1 score, precision, recall, and AUC-ROC score are not reliable for assessing DR grading. This is because they do not account for two key factors: the severity of the discrepancy between the assigned and predicted grades, and the ordered nature of the DR grading scale. This research proposes computationally efficient ensemble methods for the classification of DR. These methods leverage pre-trained model weights, reducing training time and resource requirements. In addition, data augmentation techniques are used to address data limitations, improve features, and improve generalization. This combination offers a promising approach for accurate and robust DR grading. In particular, we take advantage of transfer learning using models trained on DR data and employ CLAHE for image enhancement and Gaussian blur for noise reduction. We propose a three-layer classifier that incorporates dropout and ReLU activation. This design aims to minimize overfitting while effectively extracting features and assigning DR grades. We prioritize the Quadratic Weighted Kappa (QWK) metric due to its sensitivity to label discrepancies, which is crucial for an accurate diagnosis of DR. This combined approach achieves state-of-the-art QWK scores (0.901, 0.967, and 0.944) on the Eyepacs, Aptos, and Messidor datasets.
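The preprocessing named above (CLAHE for contrast enhancement, Gaussian blur for noise reduction) can be sketched with OpenCV; the file path and parameter values below are illustrative placeholders, not the paper's settings:

```python
# Sketch of CLAHE plus Gaussian blur preprocessing for a fundus photograph.
# "fundus.png" is a placeholder path; clip limit, tile size, and kernel size are illustrative.
import cv2

img = cv2.imread("fundus.png")                       # BGR fundus photograph
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                # contrast-limited adaptive equalization

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
denoised = cv2.GaussianBlur(enhanced, (5, 5), 0)     # light noise suppression
cv2.imwrite("fundus_preprocessed.png", denoised)
```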


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Physicians; Humans; Diabetic Retinopathy/diagnosis; Algorithms; Machine Learning; Image Interpretation, Computer-Assisted/methods
15.
Int Ophthalmol ; 44(1): 91, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38367192

ABSTRACT

BACKGROUND: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images. PROBLEM STATEMENT: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening. OBJECTIVE: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF). METHODOLOGY: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence. RESULTS: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%). CONCLUSION: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.


Subjects
Diabetic Retinopathy; Microaneurysm; Humans; Diabetic Retinopathy/diagnosis; Microaneurysm/diagnosis; Algorithms; Image Interpretation, Computer-Assisted/methods; Machine Learning; Fundus Oculi
16.
BMC Med Imaging ; 24(1): 47, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373915

ABSTRACT

BACKGROUND: Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining the complete set of eight temporal DCE-MRI images requires a long scanning time, which causes patient discomfort during the scanning process. Therefore, to reduce this time, the multi-temporal feature fusing neural network with Co-attention (MTFN) is proposed to generate the eighth temporal image of DCE-MRI, enabling its acquisition without scanning. METHODS: In this paper, we propose the multi-temporal feature fusing neural network with Co-attention (MTFN) for DCE-MRI synthesis, in which the Co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. The Co-attention explores long-range dependencies, not just relationships between pixels. Therefore, the hybrid features are more helpful for generating the eighth temporal image. RESULTS: We conducted experiments on a private breast DCE-MRI dataset from hospitals and the multimodal Brain Tumor Segmentation Challenge 2018 dataset (BraTS 2018). Compared with existing methods, the experimental results show improvements, and our method generates more realistic images. In addition, we used the synthetic images to classify the molecular subtype of breast cancer: the accuracy on the original eighth temporal images and on the generated images was 89.53% and 92.46%, respectively, an improvement of about 3%, and the classification results verify the practicability of the synthetic images. CONCLUSIONS: The results of the subjective evaluation and the objective image quality metrics show the effectiveness of our method, which can obtain comprehensive and useful information. The improvement in classification accuracy proves that the images generated by our method are practical.


Subjects
Algorithms; Breast Neoplasms; Humans; Female; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Breast/pathology; Breast Neoplasms/pathology; Image Processing, Computer-Assisted
17.
Comput Biol Med ; 171: 108116, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38346370

ABSTRACT

Alzheimer's disease (AD) poses a substantial public health challenge, demanding accurate screening and diagnosis. Identifying AD in its early stages and distinguishing it from mild cognitive impairment (MCI) and healthy controls (HC) is crucial given the global aging population. Structural magnetic resonance imaging (sMRI) is essential for understanding the brain's structural changes due to atrophy. While current deep learning networks overlook long-range dependencies between voxels, vision transformers (ViT) excel at recognizing such dependencies in images, making them valuable in AD diagnosis. Our proposed method integrates convolution-attention mechanisms in transformer-based classifiers for AD brain datasets, enhancing performance without excessive computing resources. Replacing multi-head attention with lightweight multi-head self-attention (LMHSA), employing inverted residual (IRU) blocks, and introducing local feed-forward networks (LFFN) yields exceptional results. Training on AD datasets with a gradient-centralized optimizer and Adam achieves an accuracy of 94.31% for multi-class classification, 95.37% for binary classification (AD vs. HC), and 92.15% for HC vs. MCI. These outcomes surpass existing AD diagnosis approaches, showcasing the model's efficacy. Identifying key brain regions aids future clinical solutions for AD and neurodegenerative diseases. However, this study focused exclusively on the AD Neuroimaging Initiative (ADNI) cohort, emphasizing the need for a more robust, generalizable approach incorporating diverse databases beyond ADNI in future research.
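For orientation, the block that LMHSA replaces is standard multi-head self-attention. Below is a hedged PyTorch sketch of that standard block; it is not the authors' lightweight variant, whose exact design is not reproduced here, and the token dimensions are illustrative:

```python
# Reference sketch of standard multi-head self-attention; LMHSA is a lighter variant of this.
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, tokens, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)               # each: (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

tokens = torch.randn(2, 196, 256)                  # e.g., 14x14 patch tokens from an sMRI slice
print(MultiHeadSelfAttention(256)(tokens).shape)   # torch.Size([2, 196, 256])
```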


Subjects
Alzheimer Disease; Cognitive Dysfunction; Humans; Aged; Alzheimer Disease/diagnostic imaging; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Brain/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Cognitive Dysfunction/diagnostic imaging
18.
NMR Biomed ; 37(5): e5097, 2024 May.
Article in English | MEDLINE | ID: mdl-38269568

ABSTRACT

PURPOSE: Liver T1 mapping techniques typically require long breath holds or long scan times in free breathing, need correction for B1+ inhomogeneities, and process composite (water and fat) signals. The purpose of this work is to accelerate the multi-slice acquisition of liver water-selective T1 (wT1) mapping in a single breath hold, improving the k-space sampling efficiency. METHODS: The proposed continuous inversion-recovery (IR) Look-Locker methodology combines a single-shot gradient-echo spiral readout, Dixon processing, and a dictionary-based analysis for liver wT1 mapping at 3 T. The sequence parameters were adapted to obtain short scan times. The influence of fat, B1+ inhomogeneities, and TE on the estimation of T1 was first assessed using simulations. The proposed method was then validated in a phantom and in 10 volunteers, comparing it with MRS and the modified Look-Locker inversion-recovery (MOLLI) method. Finally, the clinical feasibility was investigated by comparing wT1 maps with clinical scans in nine patients. RESULTS: The phantom results are in good agreement with MRS. The proposed method encodes the IR curve for the liver wT1 estimation, is minimally sensitive to B1+ inhomogeneities, and acquires one slice in 1.2 s. The volunteer results confirmed the multi-slice capability of the proposed method, acquiring nine slices in a breath hold of 11 s. The present work shows robustness to B1+ inhomogeneities (wT1,no B1+ = 1.07 wT1,B1+ - 45.63, R² = 0.99), good repeatability (wT1,2° = 1.0 wT1,1° - 2.14, R² = 0.96), and is in better agreement with MRS (wT1 = 0.92 wT1,MRS + 103.28, R² = 0.38) than is MOLLI (wT1,MOLLI = 0.76 wT1,MRS + 254.43, R² = 0.44). The wT1 maps in patients captured diverse lesions, thus showing their clinical feasibility. CONCLUSION: A single-shot spiral acquisition can be combined with a continuous IR Look-Locker method to perform rapid, repeatable, multi-slice liver water T1 mapping at a rate of 1.2 s per slice without a B1+ map. The proposed method is suitable for nine-slice liver clinical applications acquired in a single breath hold of 11 s.
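A Look-Locker readout like the one above normally yields an apparent T1 (T1*) that is then corrected. As a hedged sketch under that assumption (synthetic samples; the paper itself uses a dictionary-based analysis rather than this three-parameter fit):

```python
# Sketch of a conventional Look-Locker T1 fit: |S(TI)| = |A - B*exp(-TI/T1*)|,
# followed by the usual correction T1 = T1*(B/A - 1). Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, A, B, T1_star):
    return np.abs(A - B * np.exp(-ti / T1_star))

ti = np.linspace(100, 3000, 20)                        # inversion times in ms
A_true, B_true, T1_true = 1.0, 1.9, 800.0              # B/A close to 2 for a good inversion
T1_star_true = T1_true / (B_true / A_true - 1.0)
signal = ir_signal(ti, A_true, B_true, T1_star_true) + np.random.normal(0, 0.01, ti.size)

(A, B, T1_star), _ = curve_fit(ir_signal, ti, signal, p0=[1.0, 2.0, 1000.0])
T1 = T1_star * (B / A - 1.0)
print(f"apparent T1* = {T1_star:.0f} ms, corrected T1 = {T1:.0f} ms")
```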


Subjects
Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Liver/diagnostic imaging; Abdomen; Respiration; Phantoms, Imaging; Reproducibility of Results; Heart
19.
Med Image Anal ; 93: 103068, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38176357

ABSTRACT

Advances in the development of largely automated microscopy methods such as MERFISH for imaging cellular structures in mouse brains are providing spatial detection of gene expression at micron resolution. While there has been tremendous progress in the field of Computational Anatomy (CA) to perform diffeomorphic mapping technologies at the tissue scales for advanced neuroinformatic studies in common coordinates, integration of molecular- and cellular-scale populations through statistical averaging via common coordinates remains as yet unattained. This paper describes the first set of algorithms for calculating geodesics in the space of diffeomorphisms, what we term space-feature-measure LDDMM, extending the family of large deformation diffeomorphic metric mapping (LDDMM) algorithms to accommodate a space-feature action on marked particles which extends consistently to the tissue scales. It leads to the derivation of a cross-modality alignment algorithm of transcriptomic data to common coordinate systems attached to standard atlases. We represent the brain data as geometric measures, termed space-feature measures, supported by a large number of unstructured points, each point representing a small volume in space and carrying a list of densities of features, elements of a high-dimensional feature space. The shape of space-feature-measure brain spaces is measured by transforming them by diffeomorphisms. The metric between these measures is obtained after embedding these objects in a linear space equipped with a norm, yielding a so-called "chordal metric".


Subjects
Brain Mapping; Brain; Animals; Mice; Brain/diagnostic imaging; Brain/anatomy & histology; Brain Mapping/methods; Algorithms; Image Interpretation, Computer-Assisted/methods; Gene Expression Profiling
20.
NMR Biomed ; 37(4): e5091, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38196195

ABSTRACT

BACKGROUND: Despite the widespread use of cine MRI for evaluation of cardiac function, existing real-time methods do not easily enable quantification of ventricular function. Moreover, segmented cine MRI assumes periodicity of cardiac motion. We aim to develop a self-gated cine MRI acquisition scheme with data-driven cluster-based binning of cardiac motion. METHODS: A Cartesian golden-step balanced steady-state free precession sequence with sorted k-space ordering was designed. Image data were acquired with breath-holding. Principal component analysis and k-means clustering were used for binning of cardiac phases. Cluster compactness in the time dimension was assessed using temporal variability, and dispersion in the spatial dimension was assessed using the Calinski-Harabasz index. The proposed and the reference electrocardiogram (ECG)-gated cine methods were compared using a four-point image quality score, SNR and CNR values, and Bland-Altman analyses of ventricular function. RESULTS: A total of 10 subjects with sinus rhythm and 8 subjects with arrhythmias underwent cardiac MRI at 3.0 T. The temporal variability was 45.6 ms (cluster) versus 24.6 ms (ECG-based) (p < 0.001), and the Calinski-Harabasz index was 59.1 ± 9.1 (cluster) versus 22.0 ± 7.1 (ECG-based) (p < 0.001). In subjects with sinus rhythm, 100% of the end-systolic and end-diastolic images from both the cluster and reference approach received the highest image quality score of 4. Relative to the reference cine images, the cluster-based multiphase (cine) image quality consistently received a one-point lower score (p < 0.05), whereas the SNR and CNR values were not significantly different (p = 0.20). In cases with arrhythmias, 97.9% of the end-systolic and end-diastolic images from the cluster approach received an image quality score of 3 or more. The mean bias values for biventricular ejection fraction and volumes derived from the cluster approach versus reference cine were negligible. CONCLUSION: ECG-free cine cardiac MRI with data-driven clustering for binning of cardiac motion is feasible and enables quantification of cardiac function.
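The binning step described above (principal component analysis followed by k-means clustering of cardiac phases) can be sketched as follows; the navigator-like input matrix is synthetic and the cluster count is illustrative, not the study's configuration:

```python
# Sketch of data-driven binning: PCA on per-readout features, then k-means into phase clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_readouts, n_features = 2000, 64
t = np.arange(n_readouts) * 0.004                          # 4 ms per readout
cardiac = np.sin(2 * np.pi * 1.1 * t)                      # ~66 bpm pseudo-motion signal
X = np.outer(cardiac, rng.normal(size=n_features)) + 0.1 * rng.normal(size=(n_readouts, n_features))

pcs = PCA(n_components=2).fit_transform(X)                 # low-dimensional motion surrogate
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(pcs)

# Each label defines one cardiac-phase bin; k-space lines sharing a label are reconstructed together.
print(np.bincount(labels))
```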


Subjects
Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging, Cine; Humans; Magnetic Resonance Imaging, Cine/methods; Image Interpretation, Computer-Assisted/methods; Cardiac-Gated Imaging Techniques/methods; Ventricular Function; Cluster Analysis; Reproducibility of Results